
    Circuit design and analysis for on-FPGA communication systems

    On-chip communication has emerged as a prominently important subject in Very-Large-Scale-Integration (VLSI) design, as the trend of technology scaling favours logic more than interconnects. Interconnects often dictate system performance, and research into new methodologies and system architectures that deliver high-performance communication services across the chip is therefore mandatory. The interconnect challenge is exacerbated in the Field-Programmable Gate Array (FPGA), a type of integrated circuit whose hardware can be programmed post-fabrication. Communication across an FPGA deteriorates as a result of interconnect scaling, and the programmable fabric, switches and the specific routing architecture introduce additional latency and bandwidth degradation that further hinder intra-chip communication performance. Past research efforts mainly focused on optimizing logic elements and functional units in FPGAs; communication over the programmable interconnect received little attention and is inadequately understood. This thesis is among the first to research on-chip communication systems built on top of programmable fabrics, and it proposes methodologies to maximize interconnect throughput. The thesis makes three major contributions: (i) an analysis of on-chip interconnect fringing, which degrades the bandwidth of communication channels due to routing congestion in reconfigurable architectures; (ii) a new analogue wave signalling scheme that significantly improves interconnect throughput by exploiting the fundamental electrical characteristics of the reconfigurable interconnect structures, and can potentially mitigate the interconnect scaling challenges; and (iii) a novel Dynamic Programming (DP)-network that provides adaptive routing in network-on-chip (NoC) systems. The DP-network performs runtime optimization for route planning and dynamic routing, which effectively utilizes the in-silicon bandwidth. This thesis explores a new horizon in reconfigurable system design, in which new methodologies and concepts are proposed to enhance on-FPGA communication throughput, a property of vital importance in new technology processes.
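
    The DP-network of contribution (iii) is, at its core, a distributed shortest-path computation: each router repeatedly relaxes its estimated cost-to-destination from its neighbours' estimates, in the spirit of Bellman-Ford. The sketch below illustrates only that idea; the node names, congestion-dependent link costs and convergence loop are illustrative assumptions, not the thesis's actual architecture.

    # Minimal sketch of a dynamic-programming routing update for a NoC mesh (assumed model).
    def dp_route_costs(links, destination, iterations=None):
        """links: dict mapping node -> {neighbour: link_cost}; returns each node's cost to destination."""
        nodes = set(links) | {n for nbrs in links.values() for n in nbrs}
        cost = {n: float("inf") for n in nodes}
        cost[destination] = 0.0
        rounds = iterations or len(nodes) - 1          # enough relaxation rounds to converge
        for _ in range(rounds):
            for node, nbrs in links.items():
                for nbr, w in nbrs.items():
                    # relax: routing through 'nbr' may be cheaper than the current estimate
                    cost[node] = min(cost[node], w + cost[nbr])
        return cost

    # Example: 2x2 mesh with one congested (more expensive) link.
    mesh = {
        (0, 0): {(0, 1): 1.0, (1, 0): 1.0},
        (0, 1): {(0, 0): 1.0, (1, 1): 3.0},   # congested link
        (1, 0): {(0, 0): 1.0, (1, 1): 1.0},
        (1, 1): {(0, 1): 3.0, (1, 0): 1.0},
    }
    print(dp_route_costs(mesh, destination=(1, 1)))   # cheapest cost from each node to (1, 1)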

    Optimizing Nonlinear Dynamics in Energy System Planning and Control

    Understanding the physical dynamics underlying energy systems is essential for achieving stable operations and for reasoning about restoration and expansion planning. The mathematics governing energy system dynamics are often described by high-order differential equations, and optimizing over these equations can be computationally challenging. To overcome these challenges, early studies focused on reduced or linearized models that fail to capture system dynamics accurately. This thesis considers generalizing and improving existing optimization methods in energy systems to represent these dynamics accurately, revisiting three applications in power transmission and gas pipeline systems. Our first application focuses on power system restoration planning. We examine transient effects in power restoration and generalize the Restoration Ordering Problem formulation with standing phase angle and voltage difference constraints to enhance transient stability. The new formulation can reduce rotor swings of synchronous generators by over 50% while having negligible impact on the blackout size, which is optimized holistically. Our second application focuses on transmission line switching in power system operations. We propose an automatic routine that actively considers transient stability during optimization. Our main contribution is a nonlinear optimization model using trapezoidal discretization over the two-axis generator model with an automatic voltage regulator (AVR). We show that congestion can lead to rotor instability and that variables controlling the set-points of automatic voltage regulators are critical to ensure oscillation stability. Our results were validated against PowerWorld simulations and exhibit an average error on the order of 0.001 degrees for rotor angles. Our third contribution focuses on compressor optimization in natural gas pipeline systems. We consider the Dynamic Optimal Gas Flow problem, which generalizes the Optimal Gas Flow problem to capture natural gas dynamics in a pipeline network. Our main contribution is a computationally efficient method to minimize gas compression costs under dynamic conditions where deliveries to customers are described by time-dependent mass flows. The scheme yields solutions that are feasible for the continuous problem and practical from an operational standpoint, and its scalability is demonstrated using realistic benchmark data.
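
    The trapezoidal discretization mentioned for the line-switching application turns each differential equation of the generator model into an algebraic constraint linking consecutive time steps, so the dynamic trajectories become decision variables of the nonlinear program. As a hedged illustration using a generic state equation (not the thesis's full two-axis AVR model), a state x with dynamics \dot{x} = f(x, u) on a step of length \Delta t is constrained by

        x_{t+1} - x_t = \frac{\Delta t}{2}\bigl(f(x_t, u_t) + f(x_{t+1}, u_{t+1})\bigr),

    so that, for example, the classical swing equation M\dot{\omega} = P_m - P_e(\delta) - D\omega with \dot{\delta} = \omega yields one such constraint per generator, per state, per time step.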

    Efficient Dynamic Compressor Optimization in Natural Gas Transmission Systems

    The growing reliance of electric power systems on gas-fired generation to balance intermittent sources of renewable energy has increased the variation and volume of flows through natural gas transmission pipelines. Adapting pipeline operations to maintain efficiency and security under these new conditions requires optimization methods that account for transients and that can quickly compute solutions in reaction to generator re-dispatch. This paper presents an efficient scheme to minimize compression costs under dynamic conditions where deliveries to customers are described by time-dependent mass flows. The optimization scheme relies on a compact representation of the gas flow physics, a trapezoidal discretization in time and space, and a two-stage approach that minimizes energy costs and then maximizes smoothness. The resulting large-scale nonlinear programs are solved using a modern interior-point method. The proposed optimization scheme is validated against an integration of the dynamic equations with adaptive time-stepping, as well as a recently proposed state-of-the-art optimal control method. The comparison shows that the solutions are feasible for the continuous problem and also practical from an operational standpoint. The results also indicate that the scheme provides at least an order-of-magnitude reduction in computation time relative to the state of the art and scales to large gas transmission networks with more than 6000 kilometers of total pipeline.
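
    The two-stage approach can be read as first minimizing compression cost and then re-optimizing for smoothness while keeping the cost near its optimum. A hedged sketch of that structure, where the tolerance \varepsilon and the smoothness penalty s are illustrative rather than the paper's exact definitions:

        c^{\star} = \min_{x}\; c(x) \ \text{s.t.}\ g(x) = 0,\ h(x) \le 0, \qquad \text{then} \qquad \min_{x}\; s(x) \ \text{s.t.}\ c(x) \le (1 + \varepsilon)\, c^{\star},\ g(x) = 0,\ h(x) \le 0,

    with c the compression energy cost, s a penalty on non-smooth compressor profiles, and g, h collecting the trapezoidally discretized gas-flow physics and the operational limits.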

    An analytical channel model for emerging wireless networks-on-chip

    Recently, wireless Networks-on-Chip (WiNoCs) have been proposed to overcome the scalability and performance limitations of traditional multi-hop wired NoC architectures. However, the adaptation of wireless technology for on-chip communication is still in its infancy, and several challenges, such as simulation and design tools that consider the technological constraints imposed by the wireless channel, are yet to be addressed. To this end, in this paper we propose an efficient channel model for WiNoCs which takes into account practical issues and constraints of the propagation medium, such as transmission frequency, operating temperature, ambient pressure and distance between the on-chip antennas. The proposed channel model demonstrates that the total path loss of the wireless channel in WiNoCs suffers not only from dielectric propagation loss (DPL) but also from molecular absorption attenuation (MAA), which reduces the reliability of the system.
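
    The modelling point is that the total path loss combines, in dB, a dielectric propagation loss term and a molecular absorption term that grows with distance and with the absorption coefficient of the medium. The sketch below only illustrates that composition; the spreading-loss form, the Beer-Lambert absorption form, the refractive index and the coefficient values are generic textbook assumptions, not the paper's fitted model.

    import math

    C = 3.0e8  # speed of light in vacuum, m/s

    def dielectric_propagation_loss_db(freq_hz, dist_m, refractive_index=2.0):
        """Spreading-style loss through a dielectric; the refractive index is a placeholder."""
        wavelength = C / (refractive_index * freq_hz)
        return 20.0 * math.log10(4.0 * math.pi * dist_m / wavelength)

    def molecular_absorption_db(freq_hz, dist_m, absorption_coeff_per_m):
        """Beer-Lambert style attenuation: received power decays as exp(-k(f) * d)."""
        return 10.0 * math.log10(math.e) * absorption_coeff_per_m * dist_m

    def total_path_loss_db(freq_hz, dist_m, absorption_coeff_per_m):
        return (dielectric_propagation_loss_db(freq_hz, dist_m)
                + molecular_absorption_db(freq_hz, dist_m, absorption_coeff_per_m))

    # Example: a 1 THz on-chip link over 1 mm with an assumed absorption coefficient.
    print(total_path_loss_db(1.0e12, 1.0e-3, absorption_coeff_per_m=50.0))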

    Load Embeddings for Scalable AC-OPF Learning

    AC Optimal Power Flow (AC-OPF) is a fundamental building block in power system optimization. It is often solved repeatedly, especially in regions with a large penetration of renewable generation, to avoid violating operational limits. Recent work has shown that deep learning can be effective in providing highly accurate approximations of AC-OPF. However, deep learning approaches may suffer from scalability issues, especially when applied to large realistic grids. This paper addresses these scalability limitations and proposes a load embedding scheme using a three-step approach. The first step formulates the load embedding problem as a bilevel optimization model that can be solved using a penalty method. The second step learns the encoding optimization in order to quickly produce load embeddings for new OPF instances. The third step is a deep learning model that uses the load embeddings to produce accurate AC-OPF approximations. The approach is evaluated experimentally on large-scale test cases from the NESTA library, and the results demonstrate that it yields order-of-magnitude improvements in training convergence and prediction accuracy.
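
    Read operationally, the pipeline trains an encoder that maps a high-dimensional load vector to a compact embedding (step two) and a predictor from that embedding to the OPF outputs (step three). The PyTorch sketch below only captures that two-network structure; the dimensions, loss, joint training loop and random data are assumptions for illustration, not the paper's exact models or training targets.

    import torch
    import torch.nn as nn

    n_loads, emb_dim, n_outputs = 2000, 64, 5000   # placeholder dimensions

    # Assumed form of step two: an encoder approximating the embedding optimization.
    encoder = nn.Sequential(nn.Linear(n_loads, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    # Assumed form of step three: a predictor from embeddings to AC-OPF outputs
    # (e.g. generator set-points and voltage magnitudes).
    predictor = nn.Sequential(nn.Linear(emb_dim, 512), nn.ReLU(), nn.Linear(512, n_outputs))

    loads = torch.randn(32, n_loads)               # a batch of load vectors (random stand-ins)
    targets = torch.randn(32, n_outputs)           # corresponding optimal solutions (stand-ins)

    opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)
    opt.zero_grad()
    pred = predictor(encoder(loads))               # embed, then approximate the OPF solution
    loss = nn.functional.mse_loss(pred, targets)   # trained jointly here for brevity
    loss.backward()
    opt.step()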

    On the nanocommunications at THz band in graphene-enabled Wireless Network-on-Chip

    One of the main challenges in supporting increasingly computation-intensive applications with scalable bandwidth requirements is the deployment of a large number of on-chip cores within a chip package. To this end, this paper investigates the Wireless Network-on-Chip (WNoC) enabled by graphene-based nanoantennas (GNAs) in the terahertz frequency band. We first develop a channel model between the GNAs that takes into account practical issues of the propagation medium, such as transmission frequency, operating temperature, ambient pressure and distance between the GNAs. In the terahertz band, not only dielectric propagation loss (DPL) but also molecular absorption attenuation (MAA), caused by various molecules and their isotopologues within the chip package, contributes to the signal transmission loss. We further propose an optimal power allocation to achieve the channel capacity subject to a transmit power constraint. By analysing the effects of the MAA on the path loss and channel capacity, the proposed channel model shows that the MAA significantly degrades the performance at certain frequencies, e.g. 1.21 THz, 1.28 THz and 1.45 THz, by up to 31.8% compared to the conventional channel model, even when the GNAs are located very close together, only 0.01 mm apart. More specifically, at a transmission frequency of 1 THz, the channel capacity of the proposed model is shown to be lower than that of the conventional model by up to 26.8% and 25% over the whole range of temperature and ambient pressure, respectively. Finally, simulation results are provided to verify the analytical findings.
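
    Optimal power allocation over a frequency-selective channel under a total transmit power constraint is classically solved by water-filling: sub-channels with lower effective noise-to-gain ratios receive more power. The sketch below shows a generic water-filling routine and Shannon capacity sum; the bisection search, the sub-channel gains and the capacity formula are standard textbook forms used for illustration, not the paper's specific derivation.

    import numpy as np

    def water_filling(inv_gains, total_power):
        """inv_gains: noise-to-gain ratio N_k / |H_k|^2 per sub-channel; returns power per sub-channel."""
        lo, hi = 0.0, float(np.max(inv_gains) + total_power)
        for _ in range(100):                       # bisection on the water level mu
            mu = 0.5 * (lo + hi)
            power = np.maximum(mu - inv_gains, 0.0)
            if power.sum() > total_power:
                hi = mu
            else:
                lo = mu
        return np.maximum(lo - inv_gains, 0.0)

    # Example: four THz sub-channels with different path losses (arbitrary units).
    inv_gains = np.array([0.5, 1.0, 2.0, 4.0])
    p = water_filling(inv_gains, total_power=4.0)
    capacity = np.sum(np.log2(1.0 + p / inv_gains))   # bits per channel use summed over sub-channels
    print(p, capacity)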

    Compact Optimization Learning for AC Optimal Power Flow

    This paper reconsiders end-to-end learning approaches to the Optimal Power Flow (OPF). Existing methods, which learn the input/output mapping of the OPF, suffer from scalability issues due to the high dimensionality of the output space. This paper first shows that the space of optimal solutions can be significantly compressed using principal component analysis (PCA). It then proposes Compact Learning, a new method that learns in a subspace of the principal components before translating the vectors back into the original output space. This compression reduces the number of trainable parameters substantially, improving scalability and effectiveness. Compact Learning is evaluated on a variety of test cases from the PGLib with up to 30,000 buses. The paper also shows that the output of Compact Learning can be used to warm-start an exact AC solver to restore feasibility, while bringing significant speed-ups.
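
    The central idea of Compact Learning can be illustrated independently of the power-flow details: fit a PCA basis on a matrix of known optimal solutions, train a model to predict only the leading principal-component coefficients, and map the prediction back to the full output space. The sketch below uses plain NumPy SVD and a linear least-squares model as stand-ins; the paper itself uses deep networks on AC-OPF data, so the data, dimensions and model here are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40))                                 # inputs (e.g. load profiles), illustrative
    Y = X @ rng.normal(size=(40, 3)) @ rng.normal(size=(3, 1000))  # high-dimensional outputs with low-rank structure

    # 1) Compress the output space with PCA (via SVD on centred outputs).
    k = 3
    mean = Y.mean(axis=0)
    U, S, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    basis = Vt[:k]                                  # top-k principal directions
    Z = (Y - mean) @ basis.T                        # coefficients in the compact subspace

    # 2) Learn input -> subspace coefficients (linear least squares as a stand-in model).
    W, *_ = np.linalg.lstsq(X, Z, rcond=None)

    # 3) Predict in the subspace, then translate back to the original output space.
    Y_hat = X @ W @ basis + mean
    print("reconstruction error:", np.abs(Y - Y_hat).max())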